Improvements to Training an RNN parser

Authors

  • Richard Billingsley
  • James Curran
Abstract

Many parsers learn sparse class distributions over trees to model natural language. Recursive Neural Networks (RNNs) use much denser representations, yet can still achieve an F-score of 92.06% on right-binarized sentences up to 15 words long. We examine an RNN model by comparing it with an abstract generative probabilistic model using a Deep Belief Network (DBN). The DBN provides both an upward- and a downward-pointing conditional model, drawing a connection between RNN and Charniak-style parsers while analytically predicting average scoring parameters in the RNN. In addition, we apply the RNN to longer sentences and develop two methods which, while having negligible effect on short-sentence parsing, improve the parsing F-score by 0.83% on longer sentences.
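The recursive composition at the heart of such a parser is compact enough to sketch. The following is a minimal illustration, not the authors' implementation: the dimensions, initialization, and scoring vector are all assumptions, but merging two child vectors through a single weight matrix and scoring each resulting node is the standard RNN parsing recipe.

```python
# Minimal sketch of recursive composition over a right-binarized tree.
# Dimensions, weights, and the toy vocabulary are illustrative assumptions.
import numpy as np

D = 50                                   # embedding dimension (assumed)
rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, (D, 2 * D))      # composition weights
b = np.zeros(D)                          # composition bias
w_score = rng.normal(0, 0.01, D)         # per-node scoring vector

def compose(left, right):
    """Merge two child vectors into a dense parent representation."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def score_tree(node, embed):
    """Return (vector, score) for a tree given as nested tuples of
    word strings, e.g. ('the', ('cat', 'sat'))."""
    if isinstance(node, str):
        return embed[node], 0.0
    left_vec, left_score = score_tree(node[0], embed)
    right_vec, right_score = score_tree(node[1], embed)
    parent = compose(left_vec, right_vec)
    return parent, left_score + right_score + w_score @ parent

embed = {w: rng.normal(0, 0.1, D) for w in ["the", "cat", "sat"]}
vec, s = score_tree(("the", ("cat", "sat")), embed)
print(f"tree score: {s:.4f}")
```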


Similar resources

Incremental Recurrent Neural Network Dependency Parser with Search-based Discriminative Training

We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010)...
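As a rough illustration of the idea, the sketch below folds features of each parser configuration into a recurrent hidden state and reads action probabilities off it with a softmax. The state update, feature vectors, and three-action set are assumptions for the sketch, not the paper's model.

```python
# Hedged sketch: a recurrent state summarizing the partial parse is
# mapped to shift-reduce action probabilities. All weights and the
# dummy configuration features are illustrative assumptions.
import numpy as np

H, A = 64, 3                              # hidden size; |{SHIFT, LEFT-ARC, RIGHT-ARC}|
rng = np.random.default_rng(1)
W_h = rng.normal(0, 0.1, (H, H))          # recurrent weights
W_x = rng.normal(0, 0.1, (H, H))          # configuration-feature weights
W_a = rng.normal(0, 0.1, (A, H))          # action output weights

def step(h, x):
    """Fold features of the latest parser configuration into the state."""
    return np.tanh(W_h @ h + W_x @ x)

def action_probs(h):
    """Softmax over parser actions given the current hidden state."""
    z = W_a @ h
    e = np.exp(z - z.max())
    return e / e.sum()

h = np.zeros(H)
for _ in range(4):                        # four parser steps with dummy features
    h = step(h, rng.normal(0, 0.1, H))
    print(action_probs(h).round(3))
```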


Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold

We present a new, efficient frame-semantic parser that labels semantic arguments to FrameNet predicates. Built using an extension to the segmental RNN that emphasizes recall, our basic system achieves competitive performance without any calls to a syntactic parser. We then introduce a method that uses phrase-syntactic annotations from the Penn Treebank during training only, through a multitask ob...
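The softmax-margin idea named in the title can be sketched independently of the segmental RNN: the log-partition term of the loss is augmented with a task cost, so costly mistakes (e.g. recall errors) are pushed further from the gold analysis. The candidate scores and costs below are illustrative assumptions.

```python
# Hedged sketch of a softmax-margin loss with a cost-augmented
# log-partition term; scores and costs are illustrative.
import numpy as np

def softmax_margin_loss(scores, costs, gold):
    """scores: model scores of candidate analyses;
    costs: task cost of each candidate vs. gold (0 for gold);
    gold: index of the gold candidate."""
    augmented = scores + costs
    log_z = np.log(np.sum(np.exp(augmented - augmented.max()))) + augmented.max()
    return log_z - scores[gold]

scores = np.array([2.0, 1.5, 0.3])
costs = np.array([0.0, 1.0, 2.5])   # recall-weighted costs (assumed)
print(softmax_margin_loss(scores, costs, gold=0))
```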


Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks

We present expected F-measure training for shift-reduce parsing with RNNs, which enables the learning of a global parsing model optimized for sentence-level F1. We apply the model to CCG parsing, where it improves over a strong greedy RNN baseline by 1.47% F1, yielding state-of-the-art results for shift-reduce CCG parsing.
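The objective itself is easy to sketch: score a set of candidate parses, turn the scores into a distribution, and take the expectation of sentence-level F1 under it. The candidate scores and F1 values below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of an expected-F-measure objective: the training signal
# is the model-expected sentence-level F1 over candidate parses.
import numpy as np

def expected_f1(scores, f1s):
    """Risk-style objective: sum_y p(y) * F1(y), with p(y) a softmax
    over global candidate scores. Maximizing it trains toward F1."""
    z = scores - scores.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(p @ f1s)

scores = np.array([1.2, 0.4, -0.3])   # global scores of candidate derivations
f1s = np.array([0.95, 0.88, 0.61])    # sentence-level F1 of each candidate
print(expected_f1(scores, f1s))
```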


Transforming Dependencies into Phrase Structures

PTB 23 results:

  Model                 F1    Sent./s
  Charniak (2000)       89.5
  Stanford PCFG (2003)  85.5  5.3
  Petrov (2007)         90.1  8.6
  Zhu (2013)            90.3  39.0
  Carreras (2008)       91.1
  CJ Reranking (2005)   91.5  4.3
  Stanford RNN (2013)   90.0  2.8
  PAD                   90.6  34.3
  PAD (Pruned)          90.5  58.6

CTB 5 results:

  Model            F1
  Charniak (2000)  80.8
  Bikel (2004)     80.6
  Petrov (2007)    83.3
  Zhu (2013)       83.2
  PAD              82.4

Contributions:
  • A phrase-structure parser (PAD) achi...


Using Parallel Features in Parsing of Machine-Translated Sentences for Correction of Grammatical Errors

In this paper, we present two dependency parser training methods appropriate for parsing outputs of statistical machine translation (SMT), which pose problems to standard parsers due to their frequent ungrammaticality. We adapt the MST parser by exploiting additional features from the source language, and by introducing artificial grammatical errors in the parser training data, so that the trai...
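The second idea, introducing artificial grammatical errors into the parser's training data, can be sketched as a simple corruption routine over clean treebank sentences. The specific operations and rates below are assumptions for illustration, not the paper's procedure.

```python
# Hedged sketch: corrupt clean training sentences with SMT-like errors
# (random token drops and adjacent swaps) so the parser sees
# ungrammatical input at training time. Rates are illustrative.
import random

def inject_errors(tokens, p_drop=0.05, p_swap=0.05, seed=0):
    """Randomly drop tokens and swap adjacent pairs to mimic
    machine-translation disfluencies."""
    rng = random.Random(seed)
    out = [t for t in tokens if rng.random() >= p_drop]
    i = 0
    while i + 1 < len(out):
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return out

sentence = "the parser was trained on clean treebank text".split()
print(inject_errors(sentence, p_drop=0.2, p_swap=0.2))
```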



Publication date: 2012